Disorder-induced mechanism for positive exchange bias fields
We propose a mechanism to explain the phenomenon of positive exchange bias on
magnetic bilayered systems. The mechanism is based on the formation of a domain
wall at a disordered interface during field cooling (FC) which induces a
symmetry breaking of the antiferromagnet, without relying on any ad hoc
assumption about the coupling between the ferromagnetic (FM) and
antiferromagnetic (AFM) layers. The domain wall is a result of the disorder at
the interface between FM and AFM, which reduces the effective anisotropy in the
region. We show that the proposed mechanism explains several known experimental
facts within a single theoretical framework. This result is supported by Monte
Carlo simulations on a microscopic Heisenberg model, by micromagnetic
calculations at zero temperature and by mean field analysis of an effective
Ising-like phenomenological model.
Comment: 5 pages, 4 figures
Phase diagram of an Ising model for ultrathin magnetic films
We study the critical properties of a two--dimensional Ising model with
competing ferromagnetic exchange and dipolar interactions, which models an
ultra-thin magnetic film with high out--of--plane anisotropy in the monolayer
limit. In this work we present a detailed calculation of the phase
diagram as a function of temperature and δ, the ratio between exchange and
dipolar interaction intensities. We compare the results of both the mean field
approximation and Monte Carlo numerical simulations in the region of low values of δ,
identifying the presence of a recently detected phase with nematic order in
different parts of the phase diagram, besides the well known striped and
tetragonal liquid phases. A remarkable qualitative difference between both
calculations is the absence, in this region of the Monte Carlo phase diagram,
of the temperature dependency of the equilibrium stripe width predicted by the
mean field approximation. We also detected the presence of an increasing number
of metastable striped states as the value of δ increases.
Comment: 9 pages, 9 figures
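The competition described in this abstract can be sketched numerically. The following is a minimal, illustrative Monte Carlo setup, not the paper's actual code: it assumes a small square lattice, periodic boundaries for the exchange term, open boundaries for the dipolar sum, and a full-energy recomputation per flip (slow but simple); the lattice size and temperature are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8  # hypothetical lattice size for illustration
spins = rng.choice([-1, 1], size=(L, L))

# Precompute pairwise 1/r^3 couplings for the dipolar term (open boundaries).
xs, ys = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
np.fill_diagonal(dist, np.inf)  # no self-interaction
inv_r3 = dist ** -3

def energy(s, delta):
    """E = -delta * sum_<ij> s_i s_j  +  sum_{i<j} s_i s_j / r_ij^3,
    with delta the exchange/dipolar intensity ratio.
    Exchange uses periodic boundaries (np.roll); the dipolar sum is
    open-boundary -- a simplification acceptable for a sketch."""
    exch = -delta * np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1)))
    flat = s.ravel()
    dip = 0.5 * flat @ inv_r3 @ flat  # 0.5 corrects double counting
    return exch + dip

def metropolis_sweep(s, delta, T):
    """One Metropolis sweep: L*L single-spin-flip attempts at temperature T."""
    for _ in range(s.size):
        i, j = rng.integers(L, size=2)
        e_old = energy(s, delta)
        s[i, j] *= -1
        dE = energy(s, delta) - e_old
        if dE > 0 and rng.random() >= np.exp(-dE / T):
            s[i, j] *= -1  # reject the move
    return s
```

At large δ the exchange term dominates and the system orders ferromagnetically; at low δ the antiferromagnetic-like dipolar term frustrates uniform order, which is the regime where striped, nematic, and tetragonal-liquid phases compete.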
Improving network generalization through selection of examples
In this work, we study how the selection of examples affects the learning procedure in a neural network and its relationship with the complexity of the function under study and its architecture.
We focus on three different problems: parity, addition of two numbers, and bit-shifting, implemented on feed-forward neural networks.
For the parity problem, one of the most widely used problems for testing learning algorithms, we obtain the result that only the use of the whole set of examples assures global learning. For the other two functions we show that generalization can be considerably improved with a particular selection of examples instead of a random one.
Sistemas Inteligentes. Red de Universidades con Carreras en Informática (RedUNCI)
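The kind of training-set construction the abstract compares can be sketched as follows. This is an illustrative example, not the authors' procedure: `parity_dataset` enumerates all input/target pairs of the n-bit parity function, and `select_examples` is a hypothetical selection heuristic (class-balanced sampling) standing in for "a particular selection of examples instead of a random one".

```python
import itertools
import random

def parity_dataset(n_bits):
    """All 2^n input/target pairs of the n-bit parity function."""
    xs = list(itertools.product([0, 1], repeat=n_bits))
    return [(x, sum(x) % 2) for x in xs]

def select_examples(dataset, k, seed=0):
    """Pick k training examples balanced across the two parity classes
    (a hypothetical selection heuristic, contrasted with plain random
    sampling of k examples)."""
    rng = random.Random(seed)
    even = [d for d in dataset if d[1] == 0]
    odd = [d for d in dataset if d[1] == 1]
    rng.shuffle(even)
    rng.shuffle(odd)
    return even[: k // 2] + odd[: k - k // 2]

data = parity_dataset(4)          # 16 examples in total
train = select_examples(data, 8)  # balanced half of the example set
```

Training a feed-forward network on `train` versus a random subset of the same size would then let one compare generalization on the held-out examples, which is the kind of experiment the abstract describes.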